Boring Infinite Descent

Authors

Abstract


Similar references

Infinite Descent on Elliptic Curves

We present an algorithm for computing an upper bound for the difference between the logarithmic height and the canonical height on elliptic curves. Moreover, a new method for performing the infinite descent on elliptic curves is given, using ideas from the geometry of numbers. These algorithms are practical and are demonstrated by a few examples.
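For readers unfamiliar with the descent principle itself, the classical textbook illustration (not the elliptic-curve method of this paper) can be sketched in a few lines of LaTeX:

```latex
% Classical infinite descent: \sqrt{2} is irrational.
\begin{proof}
Suppose $\sqrt{2} = a/b$ with $a, b \in \mathbb{Z}_{>0}$.
Then $a^2 = 2b^2$, so $a$ is even, say $a = 2c$.
Substituting gives $4c^2 = 2b^2$, hence $b^2 = 2c^2$,
so $\sqrt{2} = b/c$ with $0 < c < b$.
Iterating produces an infinite strictly decreasing sequence
of positive integers $b > c > \cdots$, which is impossible.
\end{proof}
```

The elliptic-curve descent of the abstract replaces this elementary size argument with height functions, but the underlying principle is the same: a hypothetical solution forces a strictly smaller one, and positive integers (or bounded heights) cannot decrease forever.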


InfiniteBoost: building infinite ensembles with gradient descent

In machine learning, ensemble methods have demonstrated high accuracy for a variety of problems in different areas. The best-known algorithms, used intensively in practice, are random forests and gradient boosting. In this paper we present InfiniteBoost — a novel algorithm which combines the best properties of these two approaches. The algorithm constructs the ensemble of trees for which two pr...
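Since the abstract is truncated, here is a minimal, generic gradient-boosting sketch for context — the standard residual-fitting scheme that both random forests and InfiniteBoost build on, not the InfiniteBoost algorithm itself:

```python
# Generic gradient boosting with regression stumps and squared-error
# loss (an illustrative sketch, NOT InfiniteBoost). Each round fits a
# one-split "stump" to the current residuals and adds it to the
# ensemble with a shrinkage factor.

def fit_stump(xs, residuals):
    # Pick the threshold whose two constant leaves minimize squared error.
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def boost(xs, ys, n_rounds=50, lr=0.3):
    # Stage-wise additive fitting: each stump corrects the residual
    # left by the shrunken sum of all previous stumps.
    stumps = []
    preds = [0.0] * len(xs)
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

# Toy step function: the ensemble converges to the targets.
model = boost([0, 1, 2, 3, 4, 5], [0, 0, 0, 1, 1, 1])
```

The shrinkage factor `lr` makes the residuals decay geometrically, which is why boosting with many small steps converges; InfiniteBoost's contribution, per the abstract, is combining this with random-forest-like properties.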


The Method of Infinite Descent in Stable Homotopy Theory

This paper is a continuation of [Rav02] and the second in a series of papers intended to clarify and expand results of the last chapter of [Rav04], in which we described a method for computing the Adams-Novikov E2-term and used it to determine the stable homotopy groups of spheres through dimension 108 for p = 3 and 999 for p = 5. The latter computation was a substantial improvement over prior ...


Sequent calculi for induction and infinite descent

This paper formalises and compares two different styles of reasoning with inductively defined predicates, each style being encapsulated by a corresponding sequent calculus proof system. The first system, LKID, supports traditional proof by induction, with induction rules formulated as rules for introducing inductively defined predicates on the left of sequents. We show LKID to be cut-free compl...


Stochastic Particle Gradient Descent for Infinite Ensembles

The superior performance of ensemble methods with infinite models is well known. Most of these methods are based on optimization problems in infinite-dimensional spaces with some regularization; for instance, boosting methods and convex neural networks use L1-regularization with the non-negative constraint. However, due to the difficulty of handling L1-regularization, these problems require ea...



Journal

Journal title: Metaphilosophy

Year: 2014

ISSN: 0026-1068

DOI: 10.1111/meta.12084